Search results for "Algorithmic learning theory"

Showing 10 of 10 documents

Learning from good examples

1995

The usual information in inductive inference for learning an unknown recursive function f is the set of all input/output examples (n, f(n)), n ∈ ℕ. In contrast to this approach, we show that it is considerably more powerful to work with finite sets of “good” examples, even when these good examples are required to be effectively computable. The influence of the underlying numberings, with respect to which the learning problem has to be solved, on the capabilities of inference from good examples is also investigated. It turns out that nonstandard numberings can be much more powerful than Gödel numberings.
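The contrast in the abstract can be made concrete with a minimal sketch (our own toy construction, not the paper's): against a small explicit hypothesis list standing in for a numbering, a learner needs only the finitely many "good" points on which candidates disagree, rather than the infinite stream (n, f(n)).

```python
# Hypothetical numbering: an explicit list of candidate recursive functions.
HYPOTHESES = [
    lambda n: n,          # identity
    lambda n: 2 * n,      # doubling
    lambda n: n * n,      # squaring
]

def consistent(h, examples):
    """True if hypothesis h agrees with every (input, output) example."""
    return all(h(x) == y for x, y in examples)

def learn_from_good_examples(examples):
    """Return the index of the first hypothesis consistent with the examples."""
    for i, h in enumerate(HYPOTHESES):
        if consistent(h, examples):
            return i
    return None

# Two well-chosen points suffice to separate the three hypotheses above:
good = [(2, 4), (3, 9)]               # only squaring fits both
print(learn_from_good_examples(good)) # -> 2
```

The point of the sketch is only that finitely many separating examples pin down the target within the chosen numbering; the paper's results concern which numberings make such finite sets effectively computable.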

Keywords: Algebra, Transduction (machine learning), Inductive transfer, Computational learning theory, Inductive bias, Algorithmic learning theory, Unsupervised learning, Multi-task learning, Artificial intelligence, Instance-based learning, Mathematics

Adaptive and Generative Learning: Implications from Complexity Theories

2008

One of the most important classical typologies within the organizational learning literature is the distinction between adaptive and generative learning. However, the processes of these types of learning, particularly the latter, have not been widely analyzed and incorporated into the organizational learning process. This paper puts forward a new understanding of adaptive and generative learning within organizations, grounded in some ideas from complexity theories: mainly self-organization and implicate order. Adaptive learning involves any improvement or development of the explicate order through a process of self-organization. Self-organization is a self-referential process characterized …

Keywords: Cognitive science, Cooperative learning, Computer science, Strategy and Management, Algorithmic learning theory, General Decision Sciences, Experiential learning, Learning sciences, Generative model, Management of Technology and Innovation, Organizational learning, Adaptive learning, Action learning
Published in: International Journal of Management Reviews

Learning formulae from elementary facts

1997

Since the seminal paper by E. M. Gold [Gol67], the computational learning theory community has presumed that the main problem of learning theory at the recursion-theoretic level is to restore a grammar from samples of a language, or a program from its sample computations. However, scientists in physics and biology have become accustomed to looking for interesting assertions rather than for a universal theory that explains everything.

Keywords: Computational learning theory, Grammar, Sample exclusion dimension, Algorithmic learning theory, Mathematics education, Learning theory, Reinforcement learning, Sample (statistics), Inductive reasoning, Mathematics

Organized Learning Models (Pursuer Control Optimisation)

1982

Abstract: The concept of Organized Learning is defined, and some random models are presented. For Non-Transferable Learning, it is necessary to start from instantaneous learning; in the discrete case, we form a stochastic model considering the probability of each path; with a continuous approximation, we can study the evolution of the internal state through the relative and absolute probabilities, by means of systems of differential equations. For Transferable Learning, instantaneous learning directly gives us the system evolution. Finally, the algorithms for the different models are compared.

Keywords: Machine Learning, Computational learning theory, Wake-sleep algorithm, Active learning (machine learning), Computer science, Competitive learning, Algorithmic learning theory, Stability (learning theory), Online machine learning, Pursuer, Artificial intelligence
Published in: IFAC Proceedings Volumes

Implicit learning

2008

All of us have learned much about language, music, the physical or social environment, and other complex domains, outside of any intentional attempt to acquire information. This chapter first describes how studies investigating this form of learning in laboratory situations have shifted from a rule-based interpretation to interpretations assuming a progressive tuning to the statistical regularities of the environment. The next section examines the potential of statistical learning, and whether statistical learning stems from statistical computations or from chunk formation. Then the senses in which this form of learning may be qualified as implicit are analysed. Finally, i…
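The "statistical computations" reading mentioned in the abstract can be illustrated with a hedged toy example (ours, not the chapter's): segmenting an artificial syllable stream at points of low transitional probability, in the style of classic statistical-learning experiments.

```python
from collections import Counter

# Hypothetical artificial language: three "words" concatenated with no pauses.
stream = ("bidaku" "padoti" "golabu" "bidaku" "golabu" "padoti") * 5
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

pairs = Counter(zip(syllables, syllables[1:]))   # counts of adjacent syllable pairs
firsts = Counter(syllables[:-1])                 # counts of each first syllable

def transitional_probability(a, b):
    """P(b | a): how often syllable a is immediately followed by syllable b."""
    return pairs[(a, b)] / firsts[a]

# Within-word transitions score higher than between-word ones, so word
# boundaries fall where transitional probability dips.
print(transitional_probability("bi", "da"))  # within-word: 1.0
print(transitional_probability("ku", "pa"))  # between-word: 0.5
```

Whether human learners compute something like these conditional statistics or instead extract chunks wholesale is exactly the question the chapter's second section weighs.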

Keywords: Computer science, Cognitive science, Grammar, Algorithmic learning theory, Interpretation (philosophy), Psychological nativism, Implicit learning, Associative learning, Sequence learning, Instance-based learning, Artificial intelligence, Natural language processing

Learning Processes in the Control Theory

1994

Keywords: Error-driven learning, Arts and Humanities (miscellaneous), Control theory, Algorithmic learning theory, Developmental and Educational Psychology, Reinforcement learning, Artificial intelligence, Psychology, Action learning, Applied Psychology

Probabilistic and team PFIN-type learning: General properties

2008

We consider the probability hierarchy for Popperian FINite (PFIN) learning and study its general properties. We prove that the probability hierarchy is decidable, i.e. there exists an algorithm that receives p_1 and p_2 and answers whether PFIN-type learning with probability of success p_1 is equivalent to PFIN-type learning with probability of success p_2. To prove our result, we analyze the topological structure of the probability hierarchy. We prove that it is well-ordered in descending order and order-equivalent to the ordinal ε_0. This shows that the structure of the hierarchy is very complicated. Using similar methods, we also prove that, for PFIN-type learning…

Keywords: Theoretical computer science, Computer Networks and Communications, Existential quantification, Structure (category theory), Decidability, Type (model theory), Learning in the limit, Probability of success, Finite limits, Ordinals, Discrete mathematics, Hierarchy, Applied Mathematics, Algorithmic learning theory, Probabilistic logic, Inductive inference, Inductive reasoning, Team learning, Computational Theory and Mathematics, Artificial intelligence, Mathematics
Published in: Journal of Computer and System Sciences

On the design of effective learning materials for supporting self-directed learning of programming

2012

This paper reports on action research that studies how to implement self-directed learning of programming in an academic context. Based on our findings from the previous steps of this research agenda, we focus on the design of learning materials. That is, we aim to facilitate the students' self-directed learning by developing illustrative and concise materials that students can use to efficiently develop theoretical understanding of the learning topics. In designing the materials, we rely on cognitive load theory as the guiding theoretical framework. The paper demonstrates the planning stage of our second action research cycle.

Keywords: Management science, Computer science, Algorithmic learning theory, Active learning, Context (language use), Action research, Robot learning, Action learning, Inductive programming, Learning sciences
Published in: Proceedings of the 12th Koli Calling International Conference on Computing Education Research

Measure, category and learning theory

1995

Measure and category (or rather, their recursion theoretical counterparts) have been used in Theoretical Computer Science to make precise the intuitive notion “for most of the recursive sets.” We use the notions of effective measure and category to discuss the relative sizes of inferrible sets, and their complements. We find that inferrible sets become large rather quickly in the standard hierarchies of learnability. On the other hand, the complements of the learnable sets are all large.

Keywords: Preference learning, Recursion, Theoretical computer science, Learnability, Sample exclusion dimension, Computer science, Concept learning, Algorithmic learning theory, Measure (mathematics), Recursive tree

On the impact of forgetting on learning machines

1995

People tend not to have perfect memories when it comes to learning, or to anything else for that matter. Most formal studies of learning, however, assume a perfect memory. Some approaches have restricted the number of items that can be retained. We introduce a complexity-theoretic accounting of memory utilization by learning machines. In our new model, memory is measured in bits as a function of the size of the input. There is a hierarchy of learnability based on increasing memory allotment. The lower bound results are proved using an unusual combination of pumping and mutual recursion theorem arguments. For technical reasons, it was necessary to consider two types of memory: long and sh…
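The flavor of "memory measured in bits" can be sketched with a deliberately simple, hypothetical learner (not the paper's construction): it decides whether a stream is constant while keeping only one remembered value, so its stored state stays under a fixed bit budget instead of growing with the input.

```python
MEMORY_BITS = 16  # assumed cap; the paper's model bounds memory as a function of input size

def bits_needed(state):
    """Rough size of the stored state in bits (one byte per character)."""
    return 8 * len(str(state))

def bounded_learner(stream):
    """Guess 'constant c' vs 'not constant', remembering only one past value."""
    memory = None  # long-term memory: at most one example survives
    for value in stream:
        if memory is None:
            memory = value
        elif memory != value:
            return "not constant"
        # The learner never stores the whole history, only this bounded state.
        assert bits_needed(memory) <= MEMORY_BITS
    return f"constant {memory}"

print(bounded_learner([7, 7, 7, 7]))  # -> "constant 7"
print(bounded_learner([7, 7, 3]))     # -> "not constant"
```

A perfect-memory learner could store every example seen; the hierarchy result says that, between this extreme and unbounded memory, strictly more learning power is gained at each larger memory allotment.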

Keywords: Theoretical computer science, Active learning (machine learning), Computer science, Semi-supervised learning, Mutual recursion, Artificial Intelligence, Instance-based learning, Hierarchy, Forgetting, Kolmogorov complexity, Learnability, Algorithmic learning theory, Online machine learning, Inductive reasoning, Pumping lemma for regular languages, Term (time), Computational learning theory, Hardware and Architecture, Control and Systems Engineering, Sequence learning, Software, Cognitive psychology, Information Systems
Published in: Journal of the ACM